    The Role of Human Fallibility in Psychological Research: A Survey of Mistakes in Data Management

    Errors are an inevitable consequence of human fallibility, and researchers are no exception. Most researchers can recall major frustrations or serious time delays caused by human error while collecting, analyzing, or reporting data. The present study explores mistakes made during the data-management process in psychological research. We surveyed 488 researchers about the type, frequency, seriousness, and outcome of mistakes that occurred in their research teams during the last 5 years. The majority of respondents indicated that mistakes occurred with very low or low frequency. Most respondents reported that the most frequent mistakes led to insignificant or minor consequences, such as time loss or frustration. The most serious mistakes caused insignificant or minor consequences for about a third of respondents, moderate consequences for almost half, and major or extreme consequences for about one fifth. The most frequently reported types of mistakes were ambiguous naming/defining of data, version-control errors, and wrong data processing/analysis. Most mistakes were attributed to poor project preparation or management and/or personal difficulties (physical or cognitive constraints). With these initial exploratory findings, we do not aim to provide a description representative of psychological scientists but, rather, to lay the groundwork for a systematic investigation of human fallibility in research data management and for the development of solutions to reduce errors and mitigate their impact.

    SampleSizePlanner: A Tool to Estimate and Justify Sample Size for Two-Group Studies

    Planning sample size often requires researchers to identify a statistical technique and to make several choices during their calculations. Currently, clear guidelines that help researchers find and use the applicable procedure are lacking. In the present tutorial, we introduce a web app and R package that offer nine different procedures to determine and justify the sample size for independent two-group study designs. The application highlights the most important decision points for each procedure and suggests example justifications for them. The resulting sample-size report can serve as a template for preregistrations and manuscripts.
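
    One of the nine procedures such a tool covers is classical a priori power analysis. As a rough sketch of that procedure (and not the SampleSizePlanner API itself), the Python example below uses statsmodels; the effect size, alpha, and power values are placeholder assumptions.

    ```python
    # A priori power analysis for an independent two-group design.
    # Illustrative sketch only: the inputs below are assumed values,
    # not defaults of the SampleSizePlanner app or R package.
    import math
    from statsmodels.stats.power import TTestIndPower

    d = 0.5       # assumed smallest effect size of interest (Cohen's d)
    alpha = 0.05  # assumed Type I error rate
    power = 0.90  # assumed target statistical power

    n_per_group = TTestIndPower().solve_power(
        effect_size=d, alpha=alpha, power=power, ratio=1.0,
        alternative="two-sided",
    )
    print(f"Required sample size per group: {math.ceil(n_per_group)}")
    ```

    A sample-size report built on such a calculation would also record why d = 0.5 was chosen as the smallest effect of interest, which is the kind of justification the app templates.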

    Phasic affective signals by themselves do not regulate cognitive control

    Cognitive control is a set of mechanisms that help us process conflicting stimuli and maintain goal-relevant behaviour. According to the Affective Signalling Hypothesis, conflicting stimuli are aversive and thus elicit (negative) affect; to avoid such aversive signals, affective and cognitive systems work together to increase control and thereby drive conflict adaptation. Several studies have found that affective stimuli can indeed modulate conflict adaptation; however, there is currently no evidence that phasic affective states not triggered by conflict also trigger improved cognitive control. To investigate this possibility, we intermixed trials of a conflict task with trials involving the passive viewing of emotional words. We tested whether affective states induced by affective words on a given trial trigger improved cognitive control on a subsequent conflict trial. Applying Bayesian analysis, the results of four experiments supported the absence of adaptation to aversive signals, in terms of both valence and arousal. These results suggest that phasic affective states by themselves are not sufficient to elicit an increase in control.
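
    Conflict adaptation is typically quantified as a congruency sequence effect: the current trial's congruency effect, conditioned on the previous trial's congruency. The sketch below computes that quantity from a hypothetical trial-level data set; the data frame, column names, and reaction times are invented for illustration and do not come from the study.

    ```python
    # Minimal congruency-sequence (conflict adaptation) analysis on toy data.
    # All values and column names are hypothetical.
    import pandas as pd

    trials = pd.DataFrame({
        "rt":        [510, 500, 600, 520, 610, 590, 540, 590],  # ms
        "congruent": [True, True, False, True, False, False, True, False],
    })

    # Condition each trial on the preceding trial's congruency.
    trials["prev_congruent"] = trials["congruent"].shift(1)
    trials = trials.dropna(subset=["prev_congruent"])

    # Mean RT in each (previous x current) congruency cell.
    cell_means = trials.groupby(["prev_congruent", "congruent"])["rt"].mean().unstack()

    # Congruency effect (incongruent minus congruent RT) by previous trial type.
    effect_after_congruent = cell_means.loc[True, False] - cell_means.loc[True, True]
    effect_after_incongruent = cell_means.loc[False, False] - cell_means.loc[False, True]

    # Positive values indicate adaptation: less conflict cost after conflict.
    print("Conflict adaptation (ms):", effect_after_congruent - effect_after_incongruent)
    ```

    The study's question is whether replacing "previous conflict trial" with "previous affective word" in this kind of design still yields a reduced congruency effect; its Bayesian analyses suggest it does not.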

    Quantifying Support for the Null Hypothesis in Psychology: An Empirical Investigation

    In the traditional statistical framework, nonsignificant results leave researchers in a state of suspended disbelief. In this study, we examined, empirically, the treatment and evidential impact of nonsignificant results. Our specific goals were twofold: to explore how psychologists interpret and communicate nonsignificant results and to assess how much these results constitute evidence in favor of the null hypothesis. First, we examined all nonsignificant findings mentioned in the abstracts of the 2015 volumes of Psychonomic Bulletin & Review, Journal of Experimental Psychology: General, and Psychological Science (N = 137). In 72% of these cases, nonsignificant results were misinterpreted, in that the authors inferred that the effect was absent. Second, a Bayes factor reanalysis revealed that fewer than 5% of the nonsignificant findings provided strong evidence (i.e., BF01 > 10) in favor of the null hypothesis over the alternative hypothesis. We recommend that researchers expand their statistical toolkit in order to correctly interpret nonsignificant results and to be able to evaluate the evidence for and against the null hypothesis.
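
    The Bayes factor reanalysis pits the null hypothesis against a default alternative. As a rough illustration of how BF01 can be computed from a reported test statistic, the sketch below uses pingouin's JZS Bayes factor (Rouder et al., 2009); the t value and group sizes are made up and are not taken from the reanalyzed articles.

    ```python
    # Quantify evidence for the null from a nonsignificant t test.
    # Hypothetical summary statistics; pingouin returns the JZS BF10.
    import pingouin as pg

    t, n1, n2 = 1.20, 40, 40  # invented nonsignificant two-group result

    bf10 = float(pg.bayesfactor_ttest(t, nx=n1, ny=n2))
    bf01 = 1.0 / bf10  # evidence for H0 relative to H1

    # BF01 > 10 would count as strong evidence for the null; a value
    # well below that shows a nonsignificant p is not, by itself,
    # strong support for H0.
    print(f"BF01 = {bf01:.2f}")
    ```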

    A Consensus-Based Transparency Checklist

    We present a consensus-based checklist to improve and document the transparency of research reports in social and behavioural research. An accompanying online application allows users to complete the form and generate a report that they can submit with their manuscript or post to a public repository.